Irregularly shaped text brings challenges to scene text detection (STD). Although existing contour-point-sequence-based methods achieve comparable performance, they fail to cover some highly curved, ribbon-like text lines, which limits both text-fitting capability and the applicability of STD techniques. Considering the above problem, we combine text geometric characteristics with bionics to design a natural leaf-vein-based text representation method (LVT). Specifically, we observe that leaf veins form a generally directed graph that can easily cover various geometries. Inspired by this, we treat the text contour as the leaf margin and represent it with main, lateral, and thin veins. We further construct a detection framework based on LVT, namely LeafText. In the text reconstruction stage, LeafText simulates the leaf growth process to rebuild the text contour. It first grows the main vein in Cartesian coordinates to locate the text roughly. Lateral and thin veins are then generated along the main vein's growth direction in polar coordinates; they are responsible for producing the coarse contour and refining it, respectively. Considering the deep dependence of lateral veins on the main vein, Multi-Oriented Smoothing (MOS) is proposed to enhance the robustness of the main vein and ensure reliable detection results. In addition, we propose a global incentive loss to accelerate the prediction of lateral and thin veins. Ablation experiments demonstrate that LVT depicts arbitrary-shaped text precisely and verify the effectiveness of MOS and the global incentive loss. Comparisons show that LeafText outperforms existing state-of-the-art (SOTA) methods on the MSRA-TD500, CTW1500, Total-Text, and ICDAR2015 datasets.
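As a rough illustration of the reconstruction idea (not the paper's implementation; all names are hypothetical), the following numpy sketch rebuilds a coarse contour from a centerline "main vein" plus perpendicular "lateral" radii given in polar form:

```python
import numpy as np

# Hypothetical illustration: rebuild a closed text contour from a "main vein"
# (a polyline through the text center) plus symmetric lateral-vein lengths,
# mimicking the coarse stage of a vein-style representation.

def rebuild_contour(main_vein: np.ndarray, lateral_len: np.ndarray) -> np.ndarray:
    """main_vein: (N, 2) points in Cartesian coordinates.
    lateral_len: (N,) polar radii of lateral veins, one per main-vein point.
    Returns a closed (2N, 2) coarse contour."""
    # Tangent direction of the main vein at each point (finite differences).
    tangents = np.gradient(main_vein, axis=0)
    angles = np.arctan2(tangents[:, 1], tangents[:, 0])
    # Lateral veins grow perpendicular to the local growth direction.
    normals = np.stack([-np.sin(angles), np.cos(angles)], axis=1)
    upper = main_vein + lateral_len[:, None] * normals
    lower = main_vein - lateral_len[:, None] * normals
    # Walk up one side and back down the other to close the contour.
    return np.concatenate([upper, lower[::-1]], axis=0)

# Toy example: a curved text line.
t = np.linspace(0, np.pi, 16)
vein = np.stack([100 * t, 30 * np.sin(t)], axis=1)
contour = rebuild_contour(vein, np.full(16, 12.0))
print(contour.shape)  # (32, 2)
```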
In pursuit of comprehensive performance, recent text detectors improve detection speed at the expense of accuracy. They adopt shrink-mask-based text representation strategies, which makes detection accuracy highly dependent on shrink-masks. Unfortunately, three drawbacks lead to unreliable shrink-masks. Specifically, these methods try to strengthen the discrimination of shrink-masks from the background via semantic information. However, the feature defocusing phenomenon, caused by optimizing coarse layers with fine-grained objectives, limits the extraction of semantic features. Meanwhile, since both shrink-masks and margins belong to text, the detail loss caused by ignoring margins hinders the distinction of shrink-masks from margins, which results in ambiguous shrink-mask edges. Furthermore, false-positive samples share visual features similar to shrink-masks, which aggravates the decline in shrink-mask recognition. To avoid these problems, we propose a Zoom Text Detector (ZTD) inspired by the zooming process of a camera. Specifically, a zoom-out module (ZOM) is introduced to provide coarse-grained optimization objectives for coarse layers and avoid feature defocusing. Meanwhile, a zoom-in module (ZIM) is presented to enhance margin recognition and prevent detail loss. Moreover, a sequential-visual discriminator (SVD) is designed to suppress false-positive samples through sequential and visual features. Experiments verify the superior comprehensive performance of ZTD.
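To make the zoom-out idea concrete, here is a minimal PyTorch sketch, under the assumption that a coarse layer is supervised with a downsampled, coarse-grained version of the text mask rather than the full-resolution one (the tensors and objective are stand-ins, not the paper's):

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of "zooming out": instead of forcing a coarse feature
# map to match a fine-grained mask, supervise it with a downsampled,
# coarse-grained version of that mask to match the layer's resolution.

fine_mask = (torch.rand(1, 1, 64, 64) > 0.7).float()    # fine-grained text mask
coarse_target = F.max_pool2d(fine_mask, kernel_size=4)  # 16x16 coarse objective

coarse_logits = torch.randn(1, 1, 16, 16, requires_grad=True)  # from a coarse layer
loss = F.binary_cross_entropy_with_logits(coarse_logits, coarse_target)
loss.backward()
print(coarse_target.shape, loss.item())
```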
Existing real-time text detectors reconstruct text contours directly from shrink-masks, which simplifies the framework and lets the models run fast. However, the strong dependence on predicted shrink-masks leads to unstable detection results. Moreover, discriminating shrink-masks is a pixel-wise prediction task. Supervising the network with shrink-masks alone loses much semantic context, which leads to false detection of shrink-masks. To address these problems, we construct an efficient text detection network, Adaptive Shrink-Mask for Text Detection (ASMTD), which improves accuracy during training and reduces the complexity of the inference process. First, the Adaptive Shrink-Mask (ASM) is proposed to represent text through a shrink-mask and independent adaptive offsets. It weakens the coupling between text and shrink-mask, thereby improving the robustness of detection results. Then, a Super-pixel Window (SPW) is designed to supervise the network. It exploits the surroundings of each pixel to improve the reliability of predicted shrink-masks and is not used during testing. Finally, a lightweight feature merging branch is constructed to reduce the computational cost. As demonstrated in the experiments, our method outperforms existing state-of-the-art (SOTA) methods in both detection accuracy and speed on multiple benchmarks.
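A minimal sketch of the contour-recovery idea, assuming the shrink-mask contour is expanded by independent per-point offsets (the radial expansion used here is a simplification for illustration; names are hypothetical):

```python
import numpy as np

# Hypothetical sketch: recover a text contour from a shrink-mask contour plus
# independent per-point adaptive offsets, rather than a single fixed expansion.

def expand_contour(shrink_pts: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """shrink_pts: (N, 2) ordered contour points of the shrink-mask.
    offsets: (N,) predicted outward distance for each contour point."""
    center = shrink_pts.mean(axis=0)
    # Push each point away from the centroid by its own offset; a real
    # implementation would use local contour normals instead of this proxy.
    dirs = shrink_pts - center
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True) + 1e-6
    return shrink_pts + offsets[:, None] * dirs

square = np.array([[10, 10], [30, 10], [30, 20], [10, 20]], dtype=float)
print(expand_contour(square, np.array([3.0, 2.5, 3.0, 2.5])))
```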
Fast arbitrary-shaped text detection has recently become an attractive research topic. However, most existing methods are non-real-time, which may fall short of the needs of intelligent systems. Although a few real-time text detection methods have been proposed, their detection accuracy lags far behind non-real-time methods. To improve detection accuracy and speed simultaneously, we propose a novel fast and accurate text detection framework, CM-Net, built on a new text representation method and a multi-perspective feature (MPF) module. The former fits arbitrary-shaped text contours through a concentric mask (CM) in an efficient and robust way. The latter encourages the network to learn more CM-related discriminative features from multiple perspectives while introducing no extra computational cost. Benefiting from the advantages of CM and MPF, the proposed CM-Net only needs to predict one CM per text instance to rebuild the text contour, achieving the best balance between detection accuracy and speed compared with previous works. Moreover, to ensure that multi-perspective features are learned effectively, a multi-factor constraints loss is proposed. Extensive experiments demonstrate that the proposed CM is efficient and robust for fitting arbitrary-shaped text instances, and they also validate the effectiveness of the MPF module and the constraints loss for learning discriminative text features. Furthermore, experimental results show that CM-Net outperforms existing state-of-the-art (SOTA) real-time text detection methods in both detection speed and accuracy on the MSRA-TD500, CTW1500, Total-Text, and ICDAR2015 datasets.
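A minimal sketch of how a single concentric-mask-style prediction might rebuild a contour, assuming the prediction amounts to a text center plus ray lengths at fixed angles (an interpretation for illustration, not the paper's exact formulation):

```python
import numpy as np

# Hypothetical sketch of contour recovery from one concentric-mask-style
# prediction: a text center plus K ray lengths sampled at fixed angles.

def contour_from_rays(center: np.ndarray, ray_len: np.ndarray) -> np.ndarray:
    """center: (2,) text center; ray_len: (K,) distance to the contour
    along K evenly spaced angles. Returns (K, 2) contour points."""
    angles = np.linspace(0.0, 2.0 * np.pi, len(ray_len), endpoint=False)
    rays = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return center + ray_len[:, None] * rays

contour = contour_from_rays(np.array([50.0, 40.0]), np.full(36, 20.0))
print(contour.shape)  # (36, 2)
```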
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
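A minimal skeleton of the generative-replay idea, with toy stand-in networks and an entropy-minimization objective substituted for the paper's actual losses (all names hypothetical):

```python
import torch
import torch.nn as nn

# Hypothetical skeleton of generative appearance replay: when adapting to a
# new domain, mix unlabeled images from that domain with images replayed by
# a generator trained on earlier domains, so no previously seen data is kept.

def adaptation_step(seg_model, generator, new_imgs, optimizer):
    with torch.no_grad():                        # replay earlier domains
        replay = generator(torch.randn(new_imgs.size(0), 16, 1, 1))
    batch = torch.cat([new_imgs, replay], dim=0)
    probs = seg_model(batch).softmax(dim=1)
    # Stand-in unsupervised objective: entropy minimization on predictions.
    loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy stand-ins just to make the sketch executable.
seg_model = nn.Conv2d(3, 2, 3, padding=1)        # "segmentation network"
generator = nn.ConvTranspose2d(16, 3, 32)        # "appearance generator"
opt = torch.optim.Adam(seg_model.parameters(), lr=1e-4)
print(adaptation_step(seg_model, generator, torch.rand(2, 3, 32, 32), opt))
```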
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
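To illustrate why multiple relation types help graph-based detection, here is a toy per-relation aggregation in the spirit of an R-GCN layer (the relations and features are made up; this is not the MGTAB loader):

```python
import numpy as np

# Hypothetical sketch: aggregate user features separately over each
# relation's edges, then concatenate the per-relation views before
# classification (a multi-relational message-passing layer in miniature).

num_users, dim = 5, 4
x = np.random.rand(num_users, dim)
# Two toy relations as (src, dst) edge lists, e.g. "follows" and "mentions".
relations = {
    "follows": np.array([[0, 1], [2, 1], [3, 4]]),
    "mentions": np.array([[1, 0], [4, 2]]),
}

views = [x]
for edges in relations.values():
    agg = np.zeros_like(x)
    cnt = np.zeros(num_users)
    for s, d in edges:                 # mean of incoming neighbors
        agg[d] += x[s]
        cnt[d] += 1
    views.append(agg / np.maximum(cnt, 1)[:, None])

h = np.concatenate(views, axis=1)      # per-user multi-relational embedding
print(h.shape)                         # (5, 12)
```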
As one of the prevalent methods to achieve automation systems, Imitation Learning (IL) presents a promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explaining framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model from randomized masked demonstrations and uses the conventional evaluation outcome, environment returns, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions concerning frames' importance equality, the effectiveness of the importance map, and connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
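The masking-and-retraining loop follows the RISE recipe; a minimal sketch with a stubbed train-and-evaluate step (the stub is hypothetical):

```python
import numpy as np

# Hypothetical sketch of the RISE-style loop: mask out random subsets of
# demonstration frames, retrain, and weight each mask by the resulting
# environment return to accumulate a per-frame importance map.

def importance_map(num_frames, train_and_evaluate, iters=100, keep_prob=0.5):
    """train_and_evaluate: callable mapping a binary frame mask (num_frames,)
    to the average environment return of the retrained policy (a stub here)."""
    rng = np.random.default_rng(0)
    importance = np.zeros(num_frames)
    for _ in range(iters):
        mask = (rng.random(num_frames) < keep_prob).astype(float)
        importance += train_and_evaluate(mask) * mask
    return importance / (iters * keep_prob)

# Stub evaluator: pretend frames 10-19 are the ones that matter.
stub = lambda m: m[10:20].mean()
print(importance_map(50, stub).round(2)[:25])
```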
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
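A minimal sketch of saliency-aware pooling, assuming a per-pixel artifact map is weighted by a normalized saliency map before spatial pooling (the arrays are toy stand-ins; the paper's actual fusion may differ):

```python
import numpy as np

# Hypothetical sketch: weight a per-pixel artifact map by a visual-saliency
# map before spatial pooling, so artifacts in salient regions dominate.

h, w = 36, 64
artifact_map = np.random.rand(h, w)        # e.g. blocking strength per pixel
saliency = np.random.rand(h, w)
saliency /= saliency.sum()                 # normalize to a distribution

score = (artifact_map * saliency).sum()    # saliency-weighted artifact level
print(round(float(score), 4))
```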
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive the tractable reformulation for our model. In particular, we show that the return-risk model can also account for risk from uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
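One plausible way to write the weighted mean-percentile objective under a Wasserstein ambiguity set (notation assumed here, not taken from the paper):

```latex
% Sketch of the return-risk objective (notation assumed):
% \pi is a policy, R^{\pi} the random cumulative reward, \lambda \in [0,1]
% the mean/percentile weight, and \mathcal{B}_{\varepsilon}(\widehat{P}) a
% Wasserstein ball of radius \varepsilon around the nominal distribution.
\max_{\pi \in \Pi} \;
\inf_{P \in \mathcal{B}_{\varepsilon}(\widehat{P})}
\left\{
  \lambda \, \mathbb{E}_{P}\!\left[ R^{\pi} \right]
  + (1 - \lambda)\, \operatorname{VaR}^{P}_{\alpha}\!\left( R^{\pi} \right)
\right\}
```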
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pretraining in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
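A minimal sketch of the second-stage photometric objective in the style commonly used for self-supervised depth and ego-motion learning (the reprojection warp is stubbed out; the constants follow the usual SSIM choices and are assumptions, not the paper's):

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of a photometric objective: compare the current frame
# against a neighboring frame warped into its view (via predicted depth and
# ego-motion, stubbed here), using an L1 + SSIM-style mixture.

def photometric_error(target, warped, alpha=0.85):
    l1 = (target - warped).abs().mean(dim=1, keepdim=True)
    mu_t = F.avg_pool2d(target, 3, 1, 1)
    mu_w = F.avg_pool2d(warped, 3, 1, 1)
    sigma_t = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_t ** 2
    sigma_w = F.avg_pool2d(warped ** 2, 3, 1, 1) - mu_w ** 2
    sigma_tw = F.avg_pool2d(target * warped, 3, 1, 1) - mu_t * mu_w
    ssim = ((2 * mu_t * mu_w + 1e-4) * (2 * sigma_tw + 9e-4)) / (
        (mu_t ** 2 + mu_w ** 2 + 1e-4) * (sigma_t + sigma_w + 9e-4)
    )
    dssim = ((1 - ssim.mean(dim=1, keepdim=True)) / 2).clamp(0, 1)
    return alpha * dssim + (1 - alpha) * l1

target = torch.rand(1, 3, 64, 64)
warped = target + 0.05 * torch.randn_like(target)  # stand-in for a reprojected frame
print(photometric_error(target, warped).mean().item())
```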